
    Toric Eigenvalue Methods for Solving Sparse Polynomial Systems

    We consider the problem of computing homogeneous coordinates of points in a zero-dimensional subscheme of a compact toric variety X. Our starting point is a homogeneous ideal I in the Cox ring of X, which gives a global description of this subscheme. It was recently shown that eigenvalue methods for solving this problem lead to robust numerical algorithms for solving (nearly) degenerate sparse polynomial systems. In this work, we give a first description of this strategy for non-reduced, zero-dimensional subschemes of X. That is, we allow isolated points with arbitrary multiplicities. Additionally, we investigate the regularity of I to provide the first universal complexity bounds for the approach, as well as sharper bounds for weighted homogeneous, multihomogeneous and unmixed sparse systems, among others. We disprove a recent conjecture regarding the regularity and prove an alternative version. Our contributions are illustrated by several examples. Comment: 41 pages, 7 figures
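The core of an eigenvalue method is that the coordinates of the solution points appear as eigenvalues of multiplication operators on a finite-dimensional quotient ring. As a minimal illustration of that principle (not the paper's toric algorithm), the univariate case reduces to computing the eigenvalues of the companion matrix, which represents multiplication by x modulo p:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    c0 + c1*x + ... + c_{d-1}*x^(d-1) + x^d (constant term first).
    It represents multiplication by x in C[x]/(p)."""
    d = len(coeffs)
    M = np.zeros((d, d))
    M[1:, :-1] = np.eye(d - 1)      # shift: x * x^i = x^(i+1)
    M[:, -1] = -np.asarray(coeffs)  # reduce x^d modulo p
    return M

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2); constant term first: [2, -3]
roots = np.linalg.eigvals(companion([2.0, -3.0]))
print(sorted(roots.real))  # the two roots, up to floating-point error
```

The numerical robustness the abstract refers to comes from exactly this reduction: eigenvalue solvers behave well even when the roots nearly collide, where naive root-finding degrades.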

    Unravelling stellar populations in the Andromeda Galaxy

    To understand the history and formation mechanisms of galaxies, it is crucial to determine their current multidimensional structure. Here we focus on stellar population properties, such as metallicity and [α/Fe] enhancement. We devise a new technique to recover the distribution of these parameters using spatially resolved, line-of-sight averaged data. Our chemodynamical method is based on the made-to-measure (M2M) framework and results in an N-body model for the abundance distribution. We test our method on a mock data set and find that the radial and azimuthal profiles are well recovered; however, only the overall shape of the vertical profile matches the true profile. We apply our procedure to spatially resolved maps of mean [Z/H] and [α/Fe] for the Andromeda Galaxy, using an earlier barred dynamical model of M31. We find that the metallicity is enhanced along the bar, with possible maxima at the ansae. In the edge-on view the [Z/H] distribution has an X shape due to the boxy/peanut bulge; the average vertical metallicity gradient is −0.133 ± 0.006 dex/kpc. We identify a metallicity-enhanced ring around the bar, which also has relatively lower [α/Fe]. The highest [α/Fe] is found in the centre, due to the classical bulge. Away from the centre, the α-overabundance in the bar region increases with height, which could be an indication of a thick disc. We argue that the galaxy assembly resulted in a sharp peak of metallicity in the central few hundred parsecs and a more gentle negative gradient in the remaining disc, but no [α/Fe] gradient. The formation of the bar led to the re-arrangement of the [Z/H] distribution, causing a flat gradient along the bar. Subsequent star formation close to the bar ends may have produced the metallicity enhancements at the ansae and the [Z/H]-enhanced lower-α ring. Comment: 20 pages, 17 figures, 1.9MB, accepted to A&
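The made-to-measure idea can be sketched in a few lines: slowly adjust particle weights so that a binned model observable converges to the target data. The stdlib-only toy below (1-D positions, an invented 4-bin target profile, an illustrative step size) shows the "force of change" update; it is a caricature of the framework, not the authors' chemodynamical code:

```python
import random

random.seed(1)

# Toy made-to-measure sketch: adjust particle weights so the binned
# (line-of-sight averaged) model observable matches a target profile.
# The bins, target values and step size are all illustrative.
n_particles, n_bins = 2000, 4
z = [random.random() for _ in range(n_particles)]   # particle positions in [0, 1)
w = [1.0 / n_particles] * n_particles               # initial equal weights
bin_of = [min(int(zi * n_bins), n_bins - 1) for zi in z]
target = [0.4, 0.3, 0.2, 0.1]                       # desired mass per bin

def model_profile(w):
    y = [0.0] * n_bins
    for wi, b in zip(w, bin_of):
        y[b] += wi
    return y

eps = 0.2  # step size of the force-of-change update
for _ in range(200):
    y = model_profile(w)
    delta = [(y[b] - target[b]) / target[b] for b in range(n_bins)]
    # M2M "force of change": dw_i ∝ -eps * w_i * Δ(bin of particle i)
    w = [wi * (1.0 - eps * delta[b]) for wi, b in zip(w, bin_of)]

residual = max(abs(m - t) for m, t in zip(model_profile(w), target))
print(residual)  # small after convergence
```

The real method adds regularization and velocity observables, but the mechanism is the same: each particle's weight drifts in proportion to the mismatch it contributes to.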

    A nearly optimal algorithm to decompose binary forms

    Accepted to JSC. Symmetric tensor decomposition is an important problem with applications in several areas, for example signal processing, statistics, data analysis and computational neuroscience. It is equivalent to Waring's problem for homogeneous polynomials, that is, to write a homogeneous polynomial in n variables of degree D as a sum of D-th powers of linear forms, using the minimal number of summands. This minimal number is called the rank of the polynomial/tensor. We focus on decomposing binary forms, a problem that corresponds to the decomposition of symmetric tensors of dimension 2 and order D. Under this formulation, the problem finds its roots in invariant theory, where the decompositions are known as canonical forms. In this context many different algorithms were proposed. We introduce a superfast algorithm that improves the previous approaches with results from structured linear algebra. It achieves a softly linear arithmetic complexity bound. To the best of our knowledge, the previously known algorithms have at least quadratic complexity bounds. Our algorithm computes a symbolic decomposition in O(M(D) log(D)) arithmetic operations, where M(D) is the complexity of multiplying two polynomials of degree D. It is deterministic when the decomposition is unique. When the decomposition is not unique, our algorithm is randomized. We present a Monte Carlo version of it and we show how to modify it to a Las Vegas one, within the same complexity. From the symbolic decomposition, we approximate the terms of the decomposition with an error of 2^{−ε}, in O(D log^2(D) (log^2(D) + log(ε))) arithmetic operations. We use results from Kaltofen and Yagati (1989) to bound the size of the representation of the coefficients involved in the decomposition, and we bound the algebraic degree of the problem by min(rank, D − rank + 1). We show that this bound can be tight.
    When the input polynomial has integer coefficients, our algorithm performs, up to poly-logarithmic factors, O_{bit}(Dℓ + D^4 + D^3 τ) bit operations, where τ is the maximum bitsize of the coefficients and 2^{−ℓ} is the relative error of the terms in the decomposition.
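For orientation, the classical quadratic-time route that such algorithms accelerate is Sylvester's: de-binomialize the coefficients, take a kernel vector of a Hankel matrix, read the linear forms off its roots, and solve a Vandermonde system for the scalars. Below is a sketch over exact rationals on a rank-2 cubic chosen for illustration; this is the textbook method, not the paper's superfast algorithm, and the integer-only root search is a simplification that happens to suffice here:

```python
from fractions import Fraction
from math import comb

def nullspace_vector(M):
    """One exact kernel vector of matrix M over the rationals,
    via Gauss-Jordan elimination; None if the kernel is trivial."""
    m, n = len(M), len(M[0])
    M = [row[:] for row in M]
    pivots, row = [], 0
    for col in range(n):
        piv = next((i for i in range(row, m) if M[i][col] != 0), None)
        if piv is None:  # free column: read off a kernel vector
            v = [Fraction(0)] * n
            v[col] = Fraction(1)
            for prow, pcol in pivots:
                v[pcol] = -M[prow][col]
            return v
        M[row], M[piv] = M[piv], M[row]
        inv = M[row][col]
        M[row] = [x / inv for x in M[row]]
        for i in range(m):
            if i != row and M[i][col] != 0:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[row])]
        pivots.append((row, col))
        row += 1
    return None

# Example: decompose f = 2x^3 + 3x^2*y + 15x*y^2 + 7y^3 as a sum of cubes
D = 3
c = [Fraction(v) for v in (2, 3, 15, 7)]          # coefficient of x^(3-i)*y^i
a = [ci / comb(D, i) for i, ci in enumerate(c)]   # de-binomialized sequence

r = 2  # candidate rank: kernel of the (D-r+1) x (r+1) Hankel matrix of a
q = nullspace_vector([[a[i + j] for j in range(r + 1)] for i in range(D - r + 1)])

# Roots t_k of q(t) give the linear forms x + t_k*y. For this toy we only
# search small integer roots, which suffices for the chosen example.
ts = [t for t in range(-10, 11)
      if sum(qk * t**k for k, qk in enumerate(q)) == 0]

# Solve the 2x2 Vandermonde system  lam1*t1^i + lam2*t2^i = a_i  (i = 0, 1)
t1, t2 = ts
lam1 = (a[0] * t2 - a[1]) / (t2 - t1)
lam2 = (a[1] - a[0] * t1) / (t2 - t1)

# Verify the decomposition coefficient by coefficient
recon = [comb(D, i) * (lam1 * t1**i + lam2 * t2**i) for i in range(D + 1)]
assert recon == c
print(f"f = {lam1}*(x + {t1}*y)^3 + {lam2}*(x + {t2}*y)^3")
```

Every step here costs up to quadratic time in D; the paper's contribution is doing the kernel and evaluation steps with structured (Hankel) linear algebra in softly linear time.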

    Automated Machine Learning (“AutoML”): Achievements and Obstacles

    Automated Machine Learning (AutoML) aims to complement or simulate the work of machine-learning experts in carrying out a data-mining process. Recent advances in machine learning, and the competitive advantages afforded by knowledge discovery in data, have driven a surge of applications that automate the tasks of this workflow. A growing community has therefore formed around building tools that automate these tasks, whose success today depends fundamentally on machine-learning experts, who preprocess the data, build the models by choosing appropriate algorithms, and configure their hyperparameters. Various techniques aim to automate these activities, which human experts carry out with knowledge, intuition, judgment and reasoning. One issue to consider when evaluating these techniques is that productivity in data mining is not a quantitative matter; it is rather a question of the quality of what the processes produce. In the context of knowledge from data, quality refers to the validity and relevance of the patterns that models can discover in the data.
    It is therefore interesting to ask: what will happen when the work of all these experts is automated? What will happen to quality when the field of machine learning is further “democratized” by giving anyone automated analysis tools? The general objective of this research project was to survey the scope and effectiveness of the available AutoML tools, so that the results would contribute to knowledge of the state of the art of this emerging area, quantify the effectiveness of existing tools, and identify areas of improvement for the automation of this recently prominent discipline. To that end, a system of metrics was developed to assess the scope and automation capabilities of the frameworks. The most widely used frameworks were selected, and data sets were identified to serve as a sample of the problems these tools can solve. A comparative benchmark was run to evaluate their performance on the selected problems. As a result of this work, the paper “Evaluación Comparativa de Herramientas AutoML de Código Abierto” was submitted to CoNaIISI 2019 (7th Congreso Nacional de Ingeniería Informática – Sistemas de Información), where it was accepted and presented. There, the values obtained by the three tools evaluated were contrasted with an established baseline. The analysis showed that all the tools exhibited some level of effectiveness and achieved better results than a user would obtain with a basic approach. The work included a second evaluation round, in which the differences were not significant.
    We contend that one possible cause is overfitting of the generated pipelines, which we consider an interesting line of future research. Comparing the tools against each other, we observed no significant differences among them. The work also included an account of our experience with the tools, and the conclusions noted the main features of each: Auto-sklearn's ensemble models and quick-start module, TPOT's ability to export models for use without dependencies, and Auto-WEKA's ease of use, which yields good results with a single click.

    Gröbner Basis over Semigroup Algebras: Algorithms and Applications for Sparse Polynomial Systems

    Gröbner bases are one of the most powerful tools in algorithmic non-linear algebra. Their computation is an intrinsically hard problem, with a complexity at least single exponential in the number of variables. However, in most cases the polynomial systems coming from applications have some kind of structure. For example, several problems in computer-aided design, robotics, vision, biology, kinematics, cryptography, and optimization involve sparse systems where the input polynomials have a few non-zero terms. Our approach to exploit sparsity is to embed the systems in a semigroup algebra and to compute Gröbner bases over this algebra. Up to now, the algorithms that follow this approach benefit from the sparsity only in the case where all the polynomials have the same sparsity structure, that is, the same Newton polytope. We introduce the first algorithm that overcomes this restriction. Under regularity assumptions, it performs no redundant computations. Further, we extend this algorithm to compute Gröbner bases in the standard algebra and solve sparse polynomial systems over the torus (C^*)^n. The complexity of the algorithm depends on the Newton polytopes.
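To make the objects concrete, here is what a Gröbner basis delivers on a tiny sparse system with solutions on the torus: with a lexicographic order, the basis eliminates a variable, so the system can be solved by back-substitution. This uses SymPy's standard `groebner` routine in the ordinary polynomial algebra, not the semigroup-algebra algorithm of the paper:

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# A small sparse system with solutions in the torus (C*)^2:
#   x*y - 1 = 0,   x^2 + y^2 - 4 = 0
G = groebner([x*y - 1, x**2 + y**2 - 4], x, y, order="lex")
print(list(G.exprs))
# The lex basis contains a generator univariate in y (degree 4) and one
# expressing x as a polynomial in y, so solving reduces to one
# univariate equation plus back-substitution.
```

Sparsity is visible even here: the input has five terms in total, yet generic dense systems of the same degrees would be much larger, which is the gap the paper's semigroup-algebra embedding exploits systematically.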

    Improving Course Search According to the User Profile in Moodle

    Distance learning is increasingly widespread today, which calls for new techniques for storing, searching and retrieving documents that respond efficiently to this reality. To this end, it is necessary to exploit various contextual data, such as the user's characteristics and preferences. Several learning platforms try to facilitate interaction between student and teacher in order to make distance learning comfortable. This work proposes improving course search on these platforms by exploiting contextual data to personalize the results, presenting the student only with courses considered to be of interest, ordered according to their preferences. To that end, the system attempts to learn automatically from the available historical cases, using the weighted k-nearest-neighbors algorithm. Experiments on the Moodle platform show values of 95% for search precision, 90% for recall, and almost 95% in the evaluation of ranking quality. Red de Universidades con Carreras en Informática (RedUNCI)
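The weighted k-nearest-neighbors learner mentioned above can be sketched in a few lines. The feature vectors and labels below are invented stand-ins for user-profile data (e.g. normalized topic affinities), not the study's actual features:

```python
from math import sqrt

def weighted_knn(train, query, k=3):
    """Distance-weighted k-NN: each of the k nearest training points
    votes for its label with weight 1/distance; the heaviest label wins.
    train is a list of (feature_vector, label) pairs."""
    dists = sorted(
        (sqrt(sum((a - b) ** 2 for a, b in zip(xv, query))), label)
        for xv, label in train
    )
    votes = {}
    for d, label in dists[:k]:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)  # closer -> heavier
    return max(votes, key=votes.get)

# Hypothetical 2-feature profiles labeled by past interest in a course
train = [
    ((0.9, 0.1), "interested"), ((0.8, 0.2), "interested"),
    ((0.1, 0.9), "not_interested"), ((0.2, 0.8), "not_interested"),
]
print(weighted_knn(train, (0.85, 0.15)))  # -> interested
```

Weighting the votes by inverse distance is what lets the same machinery produce a ranking, not just a yes/no filter: courses can be ordered by the weight their "interested" neighbors contribute.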

    Post-weaning housing conditions influence freezing during contextual fear conditioning in adult rats

    The present study aimed to investigate the influence of housing conditions on contextual fear memory malleability. Male Wistar rats were housed in enriched, standard, or impoverished conditions after weaning and remained in these conditions throughout the entire experiment. After six weeks into those housing conditions, all animals underwent a 3-day protocol including contextual fear conditioning (day 1), memory reactivation followed by systemic administration of midazolam or vehicle (day 2), and a retention test (day 3). Percentage freezing was used as a behavioral measure of contextual fear. There was no evidence for an effect of housing conditions on the sensitivity of contextual fear memory to amnestic effects of post-reactivation midazolam administration, and no indication of amnestic effects of post-reactivation midazolam overall (including in the standard group). The inability to replicate previous demonstrations of post-reactivation amnesia using the same protocol underscores the subtle nature of post-reactivation pharmacological memory interference. Notably, impoverished housing resulted in a decrease in contextual freezing during contextual fear conditioning, reactivation and retention testing, compared to enriched and standard housing conditions. This observation warrants caution when interpreting the results from experiments regarding effects of housing on fear memory processes, particularly when freezing is used as a measure of fear.
    Fil: Schroyens, Natalie. Katholieke Universiteit Leuven; Belgium.
    Fil: Bender, Crhistian Luis. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba. Instituto de Farmacología Experimental de Córdoba. Universidad Nacional de Córdoba. Facultad de Ciencias Químicas. Instituto de Farmacología Experimental de Córdoba; Argentina.
    Fil: Alfei Palloni, Joaquin Matías. Katholieke Universiteit Leuven; Belgium.
    Fil: Molina, Víctor Alejandro. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba. Instituto de Farmacología Experimental de Córdoba. Universidad Nacional de Córdoba. Facultad de Ciencias Químicas. Instituto de Farmacología Experimental de Córdoba; Argentina.
    Fil: Luyten, Laura. Katholieke Universiteit Leuven; Belgium.
    Fil: Beckers, Tom. Katholieke Universiteit Leuven; Belgium.

    A Superfast Randomized Algorithm to Decompose Binary Forms

    Symmetric tensor decomposition is a major problem that arises in areas such as signal processing, statistics, data analysis and computational neuroscience. It is equivalent to writing a homogeneous polynomial in n variables of degree D as a sum of D-th powers of linear forms, using the minimal number of summands. This minimal number is called the rank of the polynomial/tensor. We consider the decomposition of binary forms, which corresponds to the decomposition of symmetric tensors of dimension 2 and order D. This problem has its roots in invariant theory, where the decompositions are known as canonical forms. As part of that theory, different algorithms were proposed for binary forms. In recent years, those algorithms were extended for the general symmetric tensor decomposition problem. We present a new randomized algorithm that enhances the previous approaches with results from structured linear algebra and techniques from linear recurrent sequences. It achieves a softly linear arithmetic complexity bound. To the best of our knowledge, the previously known algorithms have quadratic complexity bounds. We compute a symbolic minimal decomposition in O(M(D) log(D)) arithmetic operations, where M(D) is the complexity of multiplying two polynomials of degree D. We approximate the terms of the decomposition with an error of 2^{−ε}, in O(D log^2(D) (log^2(D) + log(ε))) arithmetic operations. To bound the size of the representation of the coefficients involved in the decomposition, we bound the algebraic degree of the problem by min(rank, D − rank + 1). When the input polynomial has integer coefficients, our algorithm performs, up to poly-logarithmic factors, O_B(Dℓ + D^4 + D^3 τ) bit operations, where τ is the maximum bitsize of the coefficients and 2^{−ℓ} is the relative error of the terms in the decomposition.
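The linear-recurrent-sequence viewpoint can be illustrated on a toy input: the de-binomialized coefficient sequence of a rank-r binary form satisfies a minimal linear recurrence of order r, and the characteristic roots of that recurrence locate the linear forms. The Berlekamp–Massey procedure below, run over exact rationals, finds the minimal recurrence; this is the quadratic textbook version, used here only to show the structure the superfast algorithm exploits:

```python
from fractions import Fraction

def berlekamp_massey(s):
    """Minimal linear recurrence of sequence s over the rationals.
    Returns (L, C) with C = [1, c1, ..., cL] such that
    s[n] + c1*s[n-1] + ... + cL*s[n-L] = 0 for every valid n."""
    s = [Fraction(v) for v in s]
    C, B = [Fraction(1)], [Fraction(1)]
    L, m, b = 0, 1, Fraction(1)
    for n in range(len(s)):
        # discrepancy of the current recurrence at position n
        d = s[n] + sum(C[i] * s[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        coef = d / b
        T = C[:]
        if len(B) + m > len(C):
            C = C + [Fraction(0)] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] -= coef * B[i]
        if 2 * L <= n:  # recurrence length must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

# De-binomialized coefficients of f = 2x^3 + 3x^2*y + 15x*y^2 + 7y^3:
# a_i = coeff_i / binom(3, i) = 2, 1, 5, 7. The minimal recurrence
# a_n = a_{n-1} + 2*a_{n-2} has characteristic roots 2 and -1, which
# locate the linear forms x + 2y and x - y of a rank-2 decomposition.
L, C = berlekamp_massey([2, 1, 5, 7])
print(L, [int(c) for c in C])  # -> 2 [1, -1, -2]
```

Finding this recurrence is equivalent to the Hankel-kernel computation of the classical method; softly linear variants of the same task are what bring the overall cost down to the bound stated in the abstract.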